Foveation-based Mechanisms Alleviate Adversarial Examples

Authors

  • Yan Luo
  • Xavier Boix
  • Gemma Roig
  • Tomaso A. Poggio
  • Qi Zhao
Abstract

We show that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a mechanism based on foveations, namely applying the CNN to different image regions. To see this, we first report results on ImageNet that lead to a revision of the hypothesis that adversarial perturbations are a consequence of CNNs acting as linear classifiers: CNNs act locally linearly to changes in the image regions containing objects recognized by the CNN, while in other regions the CNN may act non-linearly. We then corroborate that when the neural responses are linear, applying the foveation mechanism to the adversarial example tends to significantly reduce the effect of the perturbation. This is because, hypothetically, CNNs for ImageNet are robust to the changes of scale and translation of the object produced by the foveation, but this property does not generalize to transformations of the perturbation. As a result, the accuracy after a foveation is almost the same as the accuracy of the CNN without the adversarial perturbation, even when the adversarial perturbation is computed with the foveation taken into account.

This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.

Affiliations: 1. Department of Electrical and Computer Engineering, National University of Singapore, Singapore; 2. CBMM, Massachusetts Institute of Technology & Istituto Italiano di Tecnologia, Cambridge, MA
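As a concrete illustration of what a foveation amounts to, the sketch below crops a region around a chosen center and rescales it to the network's input resolution. This is a minimal sketch, not the authors' implementation; the function name `foveate`, the nearest-neighbour rescaling, and the parameter names are assumptions:

```python
import numpy as np

def foveate(image, center, crop_size, out_size):
    """Apply a foveation: crop a window around `center` and rescale it
    to the network's input resolution (nearest-neighbour for brevity)."""
    h, w = image.shape[:2]
    cy, cx = center
    half = crop_size // 2
    # Clamp the crop window to the image bounds.
    y0, y1 = max(0, cy - half), min(h, cy + half)
    x0, x1 = max(0, cx - half), min(w, cx + half)
    crop = image[y0:y1, x0:x1]
    # Nearest-neighbour rescale to out_size x out_size.
    ys = (np.arange(out_size) * crop.shape[0] / out_size).astype(int)
    xs = (np.arange(out_size) * crop.shape[1] / out_size).astype(int)
    return crop[np.ix_(ys, xs)]
```

Because the crop magnifies the object while the CNN's prediction is (per the paper's hypothesis) robust to this scale and translation change, the perturbation's effect on the foveated input is weakened.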

Related articles

Foveation-based Mechanisms Alleviate Adversarial Examples of Deep Neural Networks

Adversarial examples are images with visually imperceptible perturbations that cause Deep Neural Networks (DNNs) to fail. In this paper, we show that DNNs can recover a level of accuracy similar to that before the adversarial perturbation was added, by changing the image region to which the DNN is applied. This change of the input image region is what we call the foveation mechanism. There are many...

Full text

mixup: BEYOND EMPIRICAL RISK MINIMIZATION

Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple lin...
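The convex combinations mixup trains on can be sketched as follows. This is a minimal illustration of the principle, assuming one-hot labels; the function name `mixup_batch` and its signature are not taken from the paper:

```python
import numpy as np

def mixup_batch(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Form a convex combination of two example/label pairs (mixup).
    The mixing weight lam is drawn from Beta(alpha, alpha); labels are
    assumed to be one-hot vectors so they mix the same way as inputs."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y, lam
```

Training on such interpolated pairs encourages the network to behave linearly between training examples, which is the regularization effect the abstract alludes to.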

متن کامل

Delving into Transferable Adversarial Examples and Black-box Attacks

An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferabilit...
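Transferability can be illustrated at toy scale: craft a perturbation against one model with the fast gradient sign method and test it on a second, independently parameterized model. The sketch below uses simple logistic (linear) classifiers rather than deep networks, so it is only a schematic of the phenomenon; all names and parameter values are assumptions:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Fast gradient sign attack on a logistic model p = sigmoid(w.x + b).
    The gradient of the cross-entropy loss w.r.t. x is (p - y) * w, so the
    attack steps eps in the direction of its sign."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - y) * w
    return x + eps * np.sign(grad)
```

An example crafted against weights `w_a` often also fools a differently trained model `w_b` when the two decision boundaries are similar, which is the transfer effect the abstract studies at ImageNet scale.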

Full text


Journal:
  • CoRR

Volume: abs/1511.06292  Issue:

Pages: -

Publication year: 2015